Abstract As the use of artificial intelligence (AI) has grown exponentially across a wide variety of science applications, it has become clear that sharing data and code is critical to facilitating reproducibility and innovation. AMS recently adopted the requirement that all papers include an availability statement. However, there is no requirement to ensure that the data and code are actually freely accessible during and after publication. Studies show that without this requirement, data are openly available in roughly one-third to one-half of journal articles. In this work, we surveyed two AMS journals, Artificial Intelligence for the Earth Systems (AIES) and Monthly Weather Review (MWR), and two non-AMS journals. These journals varied in primary topic foci, publisher, and requirement of an availability statement. We examined the extent to which data and code are stated to be available in all four journals, whether readers could easily access the data and code, and what common justifications were provided for articles without open data or code. Our analysis found that roughly 75% of all articles that produced data and had an availability statement made at least some of their data openly available. Code was made openly available less frequently in three out of the four journals examined. Access was inhibited to data or code in approximately 15% of availability statements that contained at least one link. Finally, the most common justifications for not making data or code openly available referenced dataset size and restrictions on availability imposed by non-coauthor entities.
-
Abstract The benefits of collaboration between the research and operational communities during the research-to-operations (R2O) process have long been documented in the scientific literature. Operational forecasters have a practiced, expert insight into weather analysis and forecasting but typically lack the time and resources for formal research and development. Conversely, many researchers have the resources, theoretical knowledge, and formal experience to solve complex meteorological challenges but lack the understanding of operational procedures, needs, requirements, and authority necessary to effectively bridge the R2O gap. Collaboration thus serves as the most viable strategy for furthering understanding and improving prediction of atmospheric processes via ongoing multidisciplinary knowledge transfer between the research and operational communities. However, existing R2O processes leave room for improvement when it comes to collaboration throughout a new product's development cycle. This study assesses the subjective importance of collaboration at various stages of product development via a survey presented to participants of the 2021 Hazardous Weather Testbed Spring Forecasting Experiment. This feedback is then applied to create a proposed new R2O workflow that combines components from existing R2O procedures and modern co-production philosophies.
-
Abstract FrontFinder artificial intelligence (AI) is a novel machine learning algorithm trained to detect cold, warm, stationary, and occluded fronts and drylines. Fronts are associated with many high-impact weather events around the globe. Frontal analysis is still primarily done by human forecasters, often implementing their own rules and criteria for determining front positions. Such techniques result in multiple solutions by different forecasters when given identical sets of data. Numerous studies have attempted to automate this process through numerical frontal analysis. In recent years, machine learning algorithms have gained popularity in meteorology due to their ability to learn complex relationships. Our algorithm was able to reproduce three-quarters of forecaster-drawn fronts over the contiguous United States (CONUS) and NOAA's unified surface analysis domain on independent testing datasets. We applied permutation studies, an explainable artificial intelligence method, to identify the importance of each variable for each front type. The permutation studies showed that the most "important" variables for detecting fronts are consistent with observed processes in the evolution of frontal boundaries. We applied the model to an extratropical cyclone over the central United States to see how the model handles the occlusion process, with results showing that the model can resolve the early stages of occluded fronts wrapping around cyclone centers. While our algorithm is not intended to replace human forecasters, the model can streamline operational workflows by providing efficient frontal boundary identification guidance. FrontFinder has been deployed operationally at NOAA's Weather Prediction Center.
Significance Statement Frontal boundaries drive many high-impact weather events worldwide. Identification and classification of frontal boundaries are necessary to anticipate changing weather conditions; however, frontal analysis is still mainly performed by human forecasters, leaving room for subjective interpretations during the frontal analysis process. We have introduced a novel machine learning method that identifies cold, warm, stationary, and occluded fronts and drylines without the need for high-end computational resources. This algorithm can be used as a tool to expedite the frontal analysis process by ingesting real-time data in operational environments.
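The permutation study described in this abstract follows a standard recipe: shuffle one input variable at a time and measure how much the model's score degrades. A minimal sketch is given below; the function and variable names, the scoring convention (higher is better), and the averaging over repeats are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def permutation_importance(model, X, y, score_fn, n_repeats=5, rng=None):
    """Estimate each input variable's importance as the average drop in
    score when that variable's column is randomly shuffled."""
    rng = np.random.default_rng(rng)
    baseline = score_fn(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # destroy feature j's relationship to y
            drops.append(baseline - score_fn(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)  # larger drop => more important
    return importances
```

Variables whose shuffling causes the largest score drops are the ones the model relies on most, which is how the abstract's "most important variables" for each front type would be ranked.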
-
Abstract As an increasing number of machine learning (ML) products enter the research-to-operations (R2O) pipeline, researchers have anecdotally noted a perceived hesitancy by operational forecasters to adopt this relatively new technology. One explanation often cited in the literature is that this perceived hesitancy derives from the complex and opaque nature of ML methods. Because modern ML models are trained to solve tasks by optimizing a potentially complex combination of mathematical weights, thresholds, and nonlinear cost functions, it can be difficult to determine how these models reach a solution from their given input. However, it remains unclear to what degree a model's transparency may influence a forecaster's decision to use that model, or whether that impact differs between ML and more traditional (i.e., non-ML) methods. To address this question, a survey was offered to forecaster and researcher participants attending the 2021 NOAA Hazardous Weather Testbed (HWT) Spring Forecasting Experiment (SFE) with questions about how participants subjectively perceive and compare machine learning products to more traditionally derived products. Results from this study revealed few differences in how participants evaluated machine learning products compared to other types of guidance. However, comparing the responses between operational forecasters, researchers, and academics exposed notable differences in what factors the three groups considered to be most important for determining the operational success of a new forecast product. These results support the need for increased collaboration between the operational and research communities.
Significance Statement Participants of the 2021 Hazardous Weather Testbed Spring Forecasting Experiment were surveyed to assess how machine learning products are perceived and evaluated in operational settings. The results revealed little difference in how machine learning products are evaluated compared to more traditional methods but emphasized the need for explainable product behavior and comprehensive end-user training.
-
Abstract AI-based algorithms are emerging in many meteorological applications that produce imagery as output, including global weather forecasting models. However, the imagery produced by AI algorithms, especially by convolutional neural networks (CNNs), is often described as too blurry to look realistic, partly because CNNs tend to represent uncertainty as blurriness. This blurriness can be undesirable since it might obscure important meteorological features. More complex AI models, such as generative AI models, produce images that appear to be sharper. However, improved sharpness may come at the expense of a decline in other performance criteria, such as standard forecast verification metrics. To navigate any trade-off between sharpness and other performance metrics, it is important to quantitatively assess those other metrics along with sharpness. While there is a rich set of forecast verification metrics available for meteorological images, none of them focus on sharpness. This paper seeks to fill this gap by 1) exploring a variety of sharpness metrics from other fields, 2) evaluating properties of these metrics, 3) proposing the new concept of Gaussian Blur Equivalence as a tool for their uniform interpretation, and 4) demonstrating their use for sample meteorological applications, including a CNN that emulates radar imagery from satellite imagery (GREMLIN) and an AI-based global weather forecasting model (GraphCast).
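The idea behind a blur-equivalence calibration can be sketched with a simple, generic sharpness metric. The sketch below uses mean gradient magnitude as the metric and sweeps Gaussian blur levels to find the sigma whose blurred reference matches a candidate image's score; the metric choice, function names, and sweep grid are illustrative assumptions, not the paper's actual Gaussian Blur Equivalence definition.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mean_gradient_magnitude(img):
    """One simple sharpness metric: average magnitude of spatial gradients."""
    gy, gx = np.gradient(img.astype(float))
    return np.mean(np.hypot(gx, gy))

def gaussian_blur_equivalent(reference, candidate, sigmas=np.linspace(0, 5, 51)):
    """Find the blur level sigma at which the blurred reference image has
    the same sharpness score as the candidate (illustrative calibration)."""
    target = mean_gradient_magnitude(candidate)
    scores = np.array([mean_gradient_magnitude(gaussian_filter(reference, s))
                       for s in sigmas])
    return sigmas[int(np.argmin(np.abs(scores - target)))]
```

Expressing any sharpness metric as "equivalent to a Gaussian blur of sigma = x" gives different metrics a common, physically interpretable scale, which is the uniform-interpretation role the abstract describes.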
-
Abstract Artificial intelligence (AI) applications are rapidly expanding across weather, climate, and natural hazards. AI can be used to assist with forecasting weather and climate risks, including forecasting both the chance that a hazard will occur and the negative impacts from it, which means AI can help protect lives, property, and livelihoods on a global scale in our changing climate. To ensure that we are achieving this goal, the AI must be developed to be trustworthy, which is a complex and multifaceted undertaking. We present our work from the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES), where we are taking a convergence research approach. Our work deeply integrates across AI, environmental, and risk communication sciences. This involves collaboration with professional end-users to investigate how they assess the trustworthiness and usefulness of AI methods for forecasting natural hazards. In turn, we use this knowledge to develop AI that is more trustworthy. We discuss how and why end-users may trust or distrust AI methods for multiple natural hazards, including winter weather, tropical cyclones, severe storms, and coastal oceanography.
-
Abstract This paper illustrates the lessons learned as we applied the UNet3+ deep learning model to the task of building an operational model for predicting wildfire occurrence for the contiguous United States (CONUS) in the 1-to-10-day range. Through the lens of model performance, we explore the reasons for the performance improvements made possible by the model. Lessons include the importance of labeling, the impact of information loss in input variables, and the role of operational considerations in the modeling process. This work offers lessons learned for other interdisciplinary researchers working at the intersection of deep learning and fire occurrence prediction with an eye toward operationalization.
-
Abstract The purpose of this research is to build an operational model for predicting wildfire occurrence for the contiguous United States (CONUS) in the 1-to-10-day range using the UNet3+ machine learning model. This paper illustrates the range of model performance resulting from choices made in the modeling process, such as how labels are defined for the model and how input variables are codified for the model. By combining the capabilities of the UNet3+ model with a neighborhood loss function, the Fractions Skill Score (FSS), we can quantify model success by predictions made both in and around the location of the original fire occurrence label. The model is trained on weather, weather-derived fuel, and topography observational inputs and labels representing fire occurrence. Observational weather, weather-derived fuel, and topography data are sourced from the gridMET data set, a daily, CONUS-wide, high-spatial-resolution data set of surface meteorological variables. Fire occurrence labels are sourced from the U.S. Department of Agriculture's Fire Program Analysis Fire-Occurrence Database (FPA-FOD), which contains spatial wildfire occurrence data for CONUS, combining data sourced from the reporting systems of federal, state, and local organizations. By exploring the many aspects of the modeling process with the added context of model performance, this work builds understanding around the use of deep learning to predict fire occurrence in CONUS.
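The neighborhood idea behind the Fractions Skill Score can be sketched in a few lines: convert forecast and observed event grids into windowed event fractions, then compare the fractions rather than the raw pixels, so a prediction one grid cell away from the labeled fire still earns partial credit. The sketch below is a plain numpy illustration of the standard FSS definition; the function names and the brute-force windowing are assumptions for clarity, not the paper's actual loss implementation.

```python
import numpy as np

def neighborhood_fractions(field, n):
    """Fraction of 'event' pixels in an n x n window around each grid point."""
    pad = n // 2
    padded = np.pad(field.astype(float), pad, mode="constant")
    out = np.zeros(field.shape, dtype=float)
    for i in range(field.shape[0]):
        for j in range(field.shape[1]):
            out[i, j] = padded[i:i + n, j:j + n].mean()
    return out

def fractions_skill_score(forecast, observed, n=3):
    """FSS = 1 - MSE(fractions) / reference MSE; 1 is a perfect forecast,
    0 is no skill. As a training objective, 1 - FSS can serve as the loss."""
    pf = neighborhood_fractions(forecast, n)
    po = neighborhood_fractions(observed, n)
    num = np.mean((pf - po) ** 2)
    den = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - num / den if den > 0 else 1.0
```

With n = 1 this reduces to a pixel-wise comparison, where a near-miss scores zero; widening the neighborhood rewards predictions "in and around" the labeled fire location, which is the behavior the abstract describes.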